AirBnB is a marketplace for short-term rentals that allows you to list part or all of your living space for others to rent. You can rent out everything from a room in an apartment to your entire house on AirBnB. Because most of the listings are on a short-term basis, AirBnB has grown to become a popular alternative to hotels. The company itself has grown from its founding in 2008 to a 30 billion dollar valuation in 2016 and is currently worth more than any hotel chain in the world.
One challenge that hosts looking to rent their living space face is determining the optimal nightly rent price. In many areas, renters are presented with a good selection of listings and can filter on criteria like price, number of bedrooms, room type and more. Since AirBnB is a marketplace, the amount a host can charge on a nightly basis is closely linked to the dynamics of the marketplace. Here's a screenshot of the search experience on AirBnB:
As a host, if we try to charge above market price for a living space we'd like to rent, then renters will select more affordable alternatives that are similar to ours. If we set our nightly rent price too low, we'll miss out on potential revenue.
One strategy we could use is to:

- find a few listings that are similar to ours,
- average the listed nightly prices of the most similar listings, and
- set our own nightly price to that average.
The process of discovering patterns in existing data to make a prediction is called machine learning. In our case, we want to use data on local listings to predict the optimal price for us to set. In this mission, we'll explore a specific machine learning technique called k-nearest neighbors, which mirrors the strategy we just described. Before we dive further into machine learning and k-nearest neighbors, let's get familiar with the dataset we'll be working with.
While AirBnB doesn't release any data on the listings in their marketplace, a separate group named Inside AirBnB has extracted data on a sample of the listings for many of the major cities on the website. In this mission, we'll be working with their dataset from October 3, 2015 on the listings from Washington, D.C., the capital of the United States. Here's a direct link to that dataset. Each row in the dataset is a specific listing that's available for renting on AirBnB in the Washington, D.C. area.
To make the dataset less cumbersome to work with, we've removed many of the columns in the original dataset and renamed the file to dc_airbnb.csv. Here are the columns we kept:
Let's read the dataset into Pandas and become more familiar with it.
In [1]:
import numpy as np
import pandas as pd
import csv
In [3]:
dc_listings = pd.read_csv('dc_airbnb.csv')
print(dc_listings.loc[0])
Here's the strategy we wanted to use:

- find a few listings that are similar to ours,
- average the listed nightly prices of the most similar listings, and
- set our own nightly price to that average.
The k-nearest neighbors algorithm is similar to this strategy. Here's an overview:

- compute the similarity between each listing in the dataset and our own,
- select the k listings most similar to ours, and
- average the prices of those k listings to suggest our nightly price.
There are 2 things we need to unpack in more detail:

- the similarity metric we use to compare listings, and
- how to choose a value for k, the number of similar listings to consider.
In this mission, we'll define what similarity metric we're going to use. Then, we'll implement the k-nearest neighbors algorithm and use it to suggest a price for a new, unpriced listing. We'll use a k value of 5 in this mission. In later missions, we'll learn how to evaluate how good the suggested prices are, how to choose the optimal k value, and more.
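As a preview, here's a minimal sketch of that workflow on a toy dataset. The accommodates and price values below are invented for illustration; they're not taken from dc_airbnb.csv:

```python
import numpy as np
import pandas as pd

# Hypothetical mini-dataset: accommodates value and nightly price for 7 listings
toy = pd.DataFrame({
    'accommodates': [2, 3, 3, 4, 1, 3, 6],
    'price':        [85.0, 120.0, 100.0, 150.0, 60.0, 110.0, 225.0],
})

k = 5
new_listing = 3  # our space accommodates 3 people

# 1. Compute the distance between every listing and ours
toy['distance'] = np.abs(toy['accommodates'] - new_listing)

# 2. Sort by distance and keep the k nearest neighbors
nearest = toy.sort_values(by='distance').head(k)

# 3. Suggest the mean price of those neighbors
suggested_price = nearest['price'].mean()
print(suggested_price)
```

The rest of this mission walks through each of these steps on the real dataset.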
The similarity metric works by comparing a fixed set of numerical features, another word for attributes, between 2 observations, or living spaces in our case. When trying to predict a continuous value, like price, the main similarity metric that's used is Euclidean distance. Here's the general formula for Euclidean distance:
$\displaystyle d = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2 + \ldots + (q_n - p_n)^2}$
where $q_1$ to $q_n$ represent the feature values for one observation and $p_1$ to $p_n$ represent the feature values for the other observation. Here's a diagram that breaks down the Euclidean distance between the first 2 observations in the dataset using only the host_listings_count, accommodates, bedrooms, bathrooms, and beds columns:
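To make the formula concrete, here's a sketch of that computation with NumPy. The feature values for the two listings below are made up for illustration, not taken from the dataset:

```python
import numpy as np

# Hypothetical feature vectors for two listings, in the order:
# host_listings_count, accommodates, bedrooms, bathrooms, beds
first_listing  = np.array([1, 4, 1.0, 1.0, 2])
second_listing = np.array([2, 4, 1.0, 1.5, 1])

# Euclidean distance: square the differences, sum them, take the square root
distance = np.sqrt(np.sum((first_listing - second_listing) ** 2))
print(distance)
```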
In this mission, we'll use just one feature to keep things simple as you become familiar with the machine learning workflow. Since we're only using one feature, this is known as the univariate case. Here's what the formula looks like for the univariate case:
$\displaystyle d = \sqrt{(q_1 - p_1)^2}$
The square root and the squared power cancel and the formula simplifies to:
$ \displaystyle d = \left | q_1 - p_1 \right |$
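We can sanity-check this simplification numerically; the values chosen for $q_1$ and $p_1$ below are arbitrary:

```python
import numpy as np

q1, p1 = 3, 6  # arbitrary feature values for two observations

full_form = np.sqrt((q1 - p1) ** 2)  # square root of the squared difference
simplified = np.abs(q1 - p1)         # absolute difference

print(full_form, simplified)
```

Both expressions produce the same distance, so we can use the simpler absolute difference.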
The living space that we want to rent can accommodate 3 people. Let's first calculate the distance, using just the accommodates feature, between the first living space in the dataset and our own.
In [5]:
max_people_accommodate = 3
first_distance = np.abs(dc_listings['accommodates'][0] - max_people_accommodate)
print(first_distance)
The Euclidean distance between the first row in the dc_listings Dataframe and our own living space is 1. How do we know if this is high or low? If you look at the Euclidean distance equation itself, the lowest value you can achieve is 0. This happens when the value for the feature is exactly the same for both observations you're comparing. If $p_1 = q_1$, then $\displaystyle d = \left | q_1 - p_1 \right |$ results in $d = 0$. The closer the distance is to 0, the more similar the living spaces are.
If we wanted to calculate the Euclidean distance between each living space in the dataset and a living space that accommodates 3 people, like ours, here's a preview of what that would look like.
Then, we can rank the existing living spaces by ascending distance values, the proxy for similarity.
In [32]:
def calc_distance(row, acc):
    # Univariate Euclidean distance: the square root and square cancel,
    # leaving the absolute difference between the accommodates values
    distance = np.abs(row['accommodates'] - acc)
    return distance
In [33]:
distance = dc_listings.apply(lambda row: calc_distance(row,3), axis=1)
In [34]:
dc_listings['distance'] = distance
print(dc_listings['distance'].value_counts())
It looks like there are quite a few living spaces, 461 to be precise, that can accommodate 3 people just like ours. This means the 5 "nearest neighbors" we select after sorting will all have a distance value of 0. If we sort by the distance column and then just select the first 5 living spaces, we'd be biasing the result toward the ordering of the dataset.
dc_listings[dc_listings["distance"] == 0]["accommodates"]
26    3
34    3
36    3
40    3
44    3
45    3
48    3
65    3
66    3
71    3
75    3
86    3
...
Let's instead randomize the ordering of the dataset and then sort the Dataframe by the distance column. This way, all of the living spaces that accommodate the same number of people as ours will still be at the top of the Dataframe, but in random order across the first 461 rows. We set the random seed as the first step so the results are reproducible and we can perform answer checking on our end.
In [9]:
np.random.seed(1)
shuffled_indexes = np.random.permutation(len(dc_listings))
shuffled_dc_listings = dc_listings.loc[shuffled_indexes]
dc_listings = shuffled_dc_listings
In [11]:
dc_listings.head()
Out[11]:
In [12]:
dc_listings.sort_values(by='distance', inplace=True)
In [13]:
print(dc_listings['price'][:10])
Before we can select the 5 most similar living spaces and compute the average price, we need to clean the price column. Right now, the price column contains comma characters (,) and dollar sign characters ($) and is formatted as a text column instead of a numeric one. We need to remove these characters and convert the column to the float datatype. Then, we can calculate the average price.
In [14]:
#Adjusting data
stripped_commas = dc_listings['price'].str.replace(',', '')
# regex=False treats '$' as a literal dollar sign, not a regex end-of-string anchor
stripped_commas = stripped_commas.str.replace('$', '', regex=False)
In [15]:
#Series cast
dc_listings['price'] = stripped_commas.astype(float)
In [16]:
mean_price = dc_listings['price'][:5].mean()
print(mean_price)
Congrats! You've just made your first prediction! Based on the average price of other listings that accommodate 3 people, we should charge 156.6 dollars per night for a guest to stay at our living space. In the next mission, we'll dive into evaluating how good this prediction is.
Let's write a more general function that can suggest the optimal price for other values of the accommodates column. The dc_listings Dataframe has information specific to our living space, e.g. the distance column. To save you time, we've reset the dc_listings Dataframe to a clean slate and only kept the data cleaning and randomization we did since those weren't unique to the prediction we were making for our living space.
In [21]:
# Brought along the changes we made to the `dc_listings` Dataframe.
dc_listings = pd.read_csv('dc_airbnb.csv')
stripped_commas = dc_listings['price'].str.replace(',', '')
# regex=False treats '$' as a literal dollar sign, not a regex end-of-string anchor
stripped_dollars = stripped_commas.str.replace('$', '', regex=False)
dc_listings['price'] = stripped_dollars.astype('float')
dc_listings = dc_listings.loc[np.random.permutation(len(dc_listings))]
In [35]:
def predict_price(new_listing):
    # Work on a copy so we don't permanently add a distance column to dc_listings
    auxdf = dc_listings.copy()
    auxdf['distance'] = auxdf.apply(lambda row: calc_distance(row, new_listing), axis=1)
    auxdf = auxdf.sort_values(by='distance')
    # Average the prices of the 5 nearest neighbors
    return auxdf['price'].iloc[:5].mean()
In [36]:
acc_one = predict_price(1)
acc_two = predict_price(2)
acc_four = predict_price(4)
print("Accommodates 1 person: " + str(acc_one))
print("Accommodates 2 people: " + str(acc_two))
print("Accommodates 4 people: " + str(acc_four))
In this mission, we explored the problem of predicting the optimal nightly price for an AirBnB rental based on the prices of similar listings on the site. We stepped through the machine learning workflow, from selecting a feature to making predictions with the model. To explore the basics of machine learning, we limited ourselves to only one feature (the univariate case) and a fixed k value of 5.
In the next mission, we'll learn how to evaluate a model's performance.